A New Steepest Descent Differential Inclusion-Based Method for Solving General Nonsmooth Convex Optimization Problems

Authors

  • Alireza Hosseini
  • Seyed Mohammad Hosseini
Abstract

In this paper, we investigate a steepest descent neural network for solving general nonsmooth convex optimization problems. Convergence to the optimal solution set is proved analytically. We apply the method to several numerical test problems, which confirm the theoretical results and demonstrate the performance of the proposed neural network.
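The network's exact dynamics are not reproduced on this page; the sketch below only illustrates the underlying idea of a steepest descent differential inclusion, dx/dt in -∂f(x(t)), discretized by forward Euler with a subgradient oracle. The objective f, the step size, and the iteration count are illustrative assumptions, not the authors' model.

```python
import numpy as np

def subgradient(x):
    """A subgradient of f(x) = |x[0]| + 2*|x[1]| (illustrative nonsmooth convex objective)."""
    return np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])

def steepest_descent_flow(x0, h=1e-2, steps=2000):
    """Forward-Euler discretization of the differential inclusion
    dx/dt in -subdifferential f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * subgradient(x)   # Euler step along a negative subgradient
    return x

print(steepest_descent_flow([3.0, -2.0]))  # approaches the minimizer (0, 0)
```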

Related articles

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
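The model itself is cut off above; purely as an illustration of recurrent-network dynamics for constrained problems, the following sketch runs a projection-type system dx/dt = P(x - g(x)) - x with a box constraint standing in for the feasible set. The objective, constraint set, and step size are all assumptions, not the paper's one-layer network.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Projection onto the feasible box [lo, hi]^n (stand-in constraint set)."""
    return np.clip(x, lo, hi)

def recurrent_dynamics(x0, h=1e-2, steps=3000):
    """Euler-discretized projection-type dynamics dx/dt = P(x - g(x)) - x,
    with g(x) a subgradient of f(x) = ||x - c||_1."""
    c = np.array([2.0, -3.0])          # unconstrained minimizer, outside the box
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = np.sign(x - c)             # subgradient of the l1 objective
        x = x + h * (proj_box(x - g) - x)
    return x

print(recurrent_dynamics([0.0, 0.0]))  # settles at the box corner (1, -1)
```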

A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
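The double-parameter scaled formula is truncated above, so the sketch below only demonstrates the standard secant relation the excerpt refers to, using the classical BFGS update (which satisfies B_new s = y whenever sᵀy > 0) on assumed quadratic data.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of an approximate Hessian B; the result
    satisfies the secant relation B_new @ s = y whenever s @ y > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

# Quadratic test: f(x) = 0.5 x^T A x, so gradient differences are y = A s.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
s = np.array([1.0, -0.5])                  # step between consecutive iterates
y = A @ s                                  # corresponding gradient difference
B = bfgs_update(np.eye(2), s, y)

print(np.allclose(B @ s, y))               # True: the secant relation holds
print(np.all(np.linalg.eigvalsh(B) > 0))   # True: B stays positive definite
```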

A Method for Solving Convex Quadratic Programming Problems Based on Differential-Algebraic Equations

In this paper, a new model based on differential-algebraic equations (DAEs) for solving convex quadratic programming (CQP) problems is proposed. It is proved that the new approach is guaranteed to generate optimal solutions for this class of optimization problems. This paper also shows that the conventional interior point methods for solving CQP problems can be viewed as a special case of the n...
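The paper's model is not shown here; the following is a minimal sketch of one way a DAE can encode an equality-constrained CQP, min ½xᵀQx + cᵀx subject to Ax = b: integrate dx/dt = -(Qx + c + Aᵀλ) while the algebraic constraint 0 = Ax - b pins down λ. The problem data are illustrative assumptions.

```python
import numpy as np

def solve_cqp_dae(Q, c, A, b, x0, h=1e-2, steps=4000):
    """Euler integration of the semi-explicit DAE
        dx/dt = -(Q x + c + A^T lam),   0 = A x - b.
    Differentiating the algebraic constraint (A dx/dt = 0) gives
        lam = -(A A^T)^{-1} A (Q x + c),
    which keeps the trajectory on the feasible manifold Ax = b."""
    x = np.asarray(x0, dtype=float)
    AAt = A @ A.T
    for _ in range(steps):
        grad = Q @ x + c
        lam = -np.linalg.solve(AAt, A @ grad)
        x = x - h * (grad + A.T @ lam)
    return x

# Illustrative CQP: min 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1.
Q = np.eye(2); c = np.zeros(2)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x0 = np.array([1.0, 0.0])             # feasible starting point
print(solve_cqp_dae(Q, c, A, b, x0))  # approx (0.5, 0.5), the KKT solution
```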

A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs

The conjugate gradient (CG) method is one of the most popular methods for solving smooth unconstrained optimization problems due to its simplicity and low memory requirement. However, the use of CG methods has so far been largely restricted to smooth optimization problems. The purpose of this paper is to present efficient conjugate gradient-type methods to solve nonsmooth optimization probl...
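The nonsmooth modification proposed in the paper is truncated above; for orientation, here is the classical smooth-case Polak-Ribière-Polyak update d = -g_new + β d with β = g_newᵀ(g_new - g)/‖g‖², with a fixed step size standing in for a line search. The test function and all parameters are assumptions.

```python
import numpy as np

def prp_cg(grad, x0, alpha=0.1, iters=50):
    """Polak-Ribiere-Polyak conjugate gradient with a fixed step size
    (a stand-in for the line search used in practice)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(iters):
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new @ (g_new - g) / (g @ g)  # PRP parameter
        d = -g_new + max(beta, 0.0) * d       # PRP+ clipping safeguards descent
        g = g_new
    return x

A = np.array([[3.0, 0.5], [0.5, 2.0]])       # smooth quadratic f = 0.5 x^T A x
print(prp_cg(lambda x: A @ x, [4.0, -3.0]))  # approaches the minimizer (0, 0)
```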

Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization

In this paper, we present a new stochastic algorithm, namely the stochastic block mirror descent (SBMD) method for solving large-scale nonsmooth and stochastic optimization problems. The basic idea of this algorithm is to incorporate the block-coordinate decomposition and an incremental block averaging scheme into the classic (stochastic) mirror-descent method, in order to significantly reduce ...
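As a rough illustration of the block-coordinate idea only: with the Euclidean distance as the Bregman function, the mirror prox step on a sampled block reduces to a plain subgradient step on that block. The objective, noise model, step sizes, and averaging below are assumptions for the sketch, not the SBMD method as stated in the paper.

```python
import numpy as np

def sbmd(x0, blocks, steps=5000, seed=0):
    """Stochastic block-coordinate subgradient descent (mirror descent with
    the Euclidean prox): at each step, sample one coordinate block and update
    it with a noisy subgradient of f(x) = ||x||_1, leaving other blocks fixed."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)                        # running average of iterates
    for k in range(1, steps + 1):
        blk = blocks[rng.integers(len(blocks))]   # sample a block uniformly
        g = np.sign(x[blk]) + 0.1 * rng.standard_normal(len(blk))
        x[blk] = x[blk] - (1.0 / np.sqrt(k)) * g  # prox step on one block
        avg += (x - avg) / k                      # incremental iterate averaging
    return avg

blocks = [np.arange(0, 2), np.arange(2, 4)]      # two coordinate blocks
print(sbmd([2.0, -1.5, 3.0, -0.5], blocks))      # roughly approaches the minimizer 0
```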

Journal:
  • J. Optimization Theory and Applications

Volume: 159, Issue: -

Pages: -

Publication date: 2013